Results 1 - 20 of 49

1.
ACM Web Conference 2023 - Companion of the World Wide Web Conference, WWW 2023 ; : 306-309, 2023.
Article in English | Scopus | ID: covidwho-20244950

ABSTRACT

In recent years, the bicycle has been promoted worldwide as a healthy and economical means of transportation. With the increase in bicycle commuting during the COVID-19 pandemic, bicycles are also attracting attention as a last-mile mode of transportation in Mobility as a Service (MaaS). To help ensure a safe and comfortable ride, this study uses a smartphone mounted on a bicycle to analyze the rider's facial expressions, estimate the comfort of locations along the route together with their surrounding environment, and provide a map on which users can give explicit feedback (FB) after riding. By combining the emotions read from facial expressions while riding with the post-ride FB, we annotate locations with comfort levels. We then verify the relationship between locations with high comfort levels and their surrounding environment using Google Street View (GSV). © 2023 Owner/Author.

2.
ACM International Conference Proceeding Series ; 2022.
Article in English | Scopus | ID: covidwho-20243125

ABSTRACT

Facial expression recognition (FER) algorithms work well in constrained environments with little or no occlusion of the face. However, real-world face occlusion is prevalent, most notably with the need to use a face mask in the current Covid-19 scenario. While there are works on the problem of occlusion in FER, little has been done before on the particular face mask scenario. Moreover, the few works in this area largely use synthetically created masked FER datasets. Motivated by these challenges posed by the pandemic to FER, we present a novel dataset, the Masked Student Dataset of Expressions or MSD-E, consisting of 1,960 real-world non-masked and masked facial expression images collected from 142 individuals. Along with the issue of obfuscated facial features, we illustrate how other subtler issues in masked FER are represented in our dataset. We then provide baseline results using ResNet-18, finding that its performance dips in the non-masked case when trained for FER in the presence of masks. To tackle this, we test two training paradigms: contrastive learning and knowledge distillation, and find that they increase the model's performance in the masked scenario while maintaining its non-masked performance. We further visualise our results using t-SNE plots and Grad-CAM, demonstrating that these paradigms capitalise on the limited features available in the masked scenario. Finally, we benchmark SOTA methods on MSD-E. The dataset is available at https://github.com/SridharSola/MSD-E. © 2022 ACM.
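
As a concrete illustration of one of the two training paradigms named above, the following is a minimal PyTorch sketch of knowledge distillation for masked FER; the teacher/student arrangement, temperature, and loss weighting are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch of knowledge distillation for masked FER (PyTorch).
# A teacher trained on non-masked faces guides a student trained on
# masked faces. Hyperparameters here are illustrative only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Combine soft-target KL loss (scaled by T^2) with hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Usage inside a training step (teacher frozen, student trainable):
# with torch.no_grad():
#     t_logits = teacher(masked_batch)
# loss = distillation_loss(student(masked_batch), t_logits, labels)
```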

3.
2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2022 ACM International Symposium on Wearable Computers, UbiComp/ISWC 2022 ; : 216-220, 2022.
Article in English | Scopus | ID: covidwho-2326524

ABSTRACT

Work stress affects people's daily lives, and their well-being can be improved if the stress is monitored and addressed in time. Stress monitoring typically relies on attached physiological sensors, an approach that is feasible only when the person is physically present. With much of life shifting from offline to online because of the COVID-19 pandemic, remote stress measurement has become highly important. This study investigated the feasibility of estimating participants' stress levels from remote physiological signal features (rPPG) and behavioral features (facial expression and motion) extracted from facial videos recorded during online video meetings. Remote physiological signal features yielded higher stress-estimation accuracy (78.75%) than motion (70.00%) or facial expression (73.75%) features. Moreover, fusing behavioral and remote physiological signal features increased the accuracy of stress estimation to 82.50%. © 2022 Owner/Author.
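
For readers unfamiliar with rPPG, the sketch below illustrates the basic remote signal the abstract refers to: the mean green-channel intensity of the face region over time, band-passed to a plausible pulse range. It is a simplified illustration under assumed inputs, not the study's actual feature pipeline.

```python
# Sketch of a basic remote PPG (rPPG) signal: mean green-channel
# intensity per face frame, whose periodicity tracks the pulse.
import numpy as np

def green_channel_signal(face_frames):
    """face_frames: iterable of HxWx3 BGR face crops -> 1-D signal."""
    return np.array([frame[:, :, 1].mean() for frame in face_frames])

def bandpass_pulse(signal, fps, low=0.7, high=4.0):
    """Keep the 42-240 bpm band via simple FFT masking (illustration)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spec = np.fft.rfft(signal - signal.mean())
    spec[(freqs < low) | (freqs > high)] = 0
    return np.fft.irfft(spec, n=len(signal))
```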

4.
2nd International Conference on Sustainable Computing and Data Communication Systems, ICSCDS 2023 ; : 770-773, 2023.
Article in English | Scopus | ID: covidwho-2325493

ABSTRACT

Although many facial emotion recognition models exist, the Covid-19 pandemic rendered most of them far less effective, because people were compelled to wear facemasks to protect themselves against the deadly virus. Face masks hinder emotion recognition systems because crucial facial features are not visible in the image: facemasks cover essential parts of the face such as the mouth, nose, and cheeks, which play an important role in differentiating between emotions. This study aims to recognize the emotional states anger-disgust, neutral, surprise-fear, joy, and sadness of a masked person in an image. In the proposed method, a CNN model is trained on images of people wearing masks. To achieve higher accuracy, classes in the dataset are combined; different combinations of classes are tried and the results recorded. The images are taken from the FER2013 dataset, which contains a large number of manually annotated facial images. © 2023 IEEE.
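
The class combination described above amounts to remapping FER2013's seven labels onto the combined categories the abstract lists. A minimal sketch, assuming FER2013's conventional label order (verify against your copy of the dataset):

```python
# Sketch: combining FER2013's seven labels into the five classes named
# in the abstract. FER2013's conventional order (0=angry, 1=disgust,
# 2=fear, 3=happy, 4=sad, 5=surprise, 6=neutral) is assumed.
FER2013_TO_CLUBBED = {
    0: 0,  # angry    -> anger-disgust
    1: 0,  # disgust  -> anger-disgust
    2: 1,  # fear     -> surprise-fear
    5: 1,  # surprise -> surprise-fear
    3: 2,  # happy    -> joy
    4: 3,  # sad      -> sadness
    6: 4,  # neutral  -> neutral
}

def club_labels(labels):
    """Remap a list of FER2013 labels to the combined 5-class scheme."""
    return [FER2013_TO_CLUBBED[y] for y in labels]

print(club_labels([0, 1, 5, 3, 6]))  # -> [0, 0, 1, 2, 4]
```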

5.
Texto Livre-Linguagem E Tecnologia ; 16, 2023.
Article in English | Web of Science | ID: covidwho-2308089

ABSTRACT

The use of pandemic masks is one of the main behavioral changes brought about by the COVID-19 pandemic, and it has possibly hampered Facial Emotion Recognition (FER). This systematic review gathers and compares the methodologies and results of experiments, published between 2019 and 2022, that assess the impact of pandemic masks on FER. Following the PRISMA recommendations, the study was divided into three stages: identification, screening, and eligibility. The first stage was dedicated to choosing the descriptors and the time frame and applying them to the chosen databases. In the second stage, titles, abstracts, and keywords were read to select articles meeting the inclusion criteria. The articles selected at this stage were entered into the Connected Papers platform in order to explore references not identified via the databases. In the last stage, the studies were read in full and synthesized. Eleven articles were ultimately selected, and their results show that pandemic masks harm FER in a heterogeneous way. Expressions such as happiness and disgust, whose discrimination depends on the mouth region, are impaired. Sadness is also undermined by pandemic masks, often being confused with neutral faces and vice versa. For these findings to be more generalizable, future studies should adopt standardized tasks covering all basic expressions and include non-basic expressions such as shame. In addition, the implementation of dynamic stimuli with ethnic variation and control over exposure time is recommended.

6.
2023 International Conference on Intelligent Systems, Advanced Computing and Communication, ISACC 2023 ; 2023.
Article in English | Scopus | ID: covidwho-2293883

ABSTRACT

Depression is a common mental health problem that can fundamentally affect individuals' emotional well-being as well as their everyday lives, and after COVID-19 and the subsequent social isolation this issue is more potent than ever. Numerous research efforts have sought methods that recognize depression effectively. This study examines previous works that utilize various Machine Learning (ML) and Artificial Intelligence (AI) methods for depression detection, and discusses approaches for determining an individual's mood and emotion. It also discusses how a chatbot can interpret facial expression, voice, and gesture to classify whether a person is depressed. Finally, it reviews the related research works and evaluates their methods for detecting depression. © 2023 IEEE.

7.
25th International Conference on Advanced Communications Technology, ICACT 2023 ; 2023-February:411-416, 2023.
Article in English | Scopus | ID: covidwho-2305851

ABSTRACT

Due to COVID-19, wearing masks has become more common, yet recognizing expressions in images of people wearing masks is challenging. In facial recognition problems in general, blurred images and incorrectly annotated images in large-scale datasets can hamper model training and degrade recognition performance. To address this problem, the Self-Cure Network (SCN) effectively suppresses the network's over-fitting to uncertainly labeled images in large-scale facial expression datasets. However, it is not clear how well the SCN suppresses the uncertainty of masked facial expression images. This paper verifies the recognition ability of SCN on images of people wearing masks and proposes a self-adjustment module that further improves SCN (called SCN-SAM). First, we experimentally demonstrate the effectiveness of SCN on a masked facial expression dataset. We then add the self-adjustment module to SCN without extensive modifications and demonstrate that SCN-SAM outperforms state-of-the-art methods on FER datasets with synthetic added noise. © 2023 Global IT Research Institute (GiRI).

8.
Lecture Notes in Networks and Systems ; 551:579-589, 2023.
Article in English | Scopus | ID: covidwho-2296254

ABSTRACT

Advances in e-learning systems give students new opportunities to improve their academic performance, and e-learning is becoming more popular because it provides benefits over traditional learning. The coronavirus disease pandemic caused educational institution closures all across the world, leaving more than a billion students out of the classroom, so online and digital platform-based instruction has grown significantly. This study addresses this situation by providing learners with a facial emotion recognition model. A CNN model is trained to assess images and detect facial expressions, allowing students' emotions to be observed from their expressions in real time. Our technique proceeds in two phases: face detection using Haar cascades, and emotion identification using a CNN classifier trained on the FER2013 dataset with seven emotion classes. The system performs real-time facial expression recognition and helps teachers adapt their presentations to their students' emotional state. The proposed approach achieves 62% accuracy in detecting the emotional mood, higher than the state-of-the-art accuracy while requiring less processing. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
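
The two-phase pipeline described above (Haar-cascade face detection followed by CNN emotion classification) can be sketched as follows; the 48x48 grayscale input matches FER2013's format, while `emotion_model` is a placeholder for whatever trained classifier is plugged in:

```python
# Sketch: Haar-cascade face detection + CNN emotion classification on
# 48x48 grayscale crops (FER2013's input size). `emotion_model` is a
# placeholder for any Keras-style classifier trained on FER2013.
import cv2

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_emotions(frame, emotion_model):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = emotion_model.predict(face.reshape(1, 48, 48, 1))[0]
        results.append(((x, y, w, h), EMOTIONS[int(probs.argmax())]))
    return results
```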

9.
Asian Journal of Social Psychology ; 2023.
Article in English | Scopus | ID: covidwho-2295905

ABSTRACT

We tested the effects of mask use and the other-race effect on (a) face recognition, (b) recognition of facial expressions, and (c) social distance. Caucasian participants were tested in a matching-to-sample paradigm with masked or unmasked Caucasian and Asian faces. Participants performed best when recognizing an unmasked face and worst when recognizing a masked face that they had earlier seen without a mask. Accuracy was poorer for Asian faces than for Caucasian faces. The second experiment presented Asian or Caucasian faces with emotional expressions, with and without masks. Participants' emotion recognition performance decreased for masked faces. Emotions were recognized with decreasing accuracy in the following order: happy, neutral, disgusted, fearful. Performance was poorer for Asian stimuli than for Caucasian stimuli. In Experiment 3, the same participants indicated the social distance they would prefer to keep from each pictured person. They preferred a wider distance from unmasked faces than from masked faces. Preferred distance, from farthest to closest, was: disgusted, fearful, neutral, and happy. They also preferred a wider social distance from Asian faces than from Caucasian faces. Altogether, the findings indicate that mask wearing during the COVID-19 pandemic decreased recognition of faces and emotional expressions, negatively impacting communication among people of different ethnicities. © 2023 Asian Association of Social Psychology and John Wiley & Sons Australia, Ltd.

10.
51st International Congress and Exposition on Noise Control Engineering, Internoise 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2275943

ABSTRACT

Facemasks are personal protective equipment worn to reduce the risk of Covid-19 transmission. University students and teachers/lecturers in Serbia are required to wear facemasks in class at all times, but this practice may create challenges in student-teacher communication. We present students' experiences of speech intelligibility in the educational setting, collected through an anonymous online questionnaire distributed among students from various universities. Speaking through a facemask in class creates communication challenges for teachers and students alike. Students report that teachers often have difficulty understanding students who speak while wearing masks; teachers often ask students to repeat a sentence or to speak louder. Similarly, when teachers speak through their facemasks, students often report not hearing or understanding them, and in turn ask teachers to repeat the sentence or raise their voices. Students pay more attention to teachers' facial expressions, hand gestures, body language, and tone of voice, and tend to engage their non-verbal interaction skills more often to facilitate communication. We further discuss differences by students' gender and by the type of facemask typically worn. We express concern that the inability to communicate clearly may cause annoyance and frustration in the academic setting. © 2022 Internoise 2022 - 51st International Congress and Exposition on Noise Control Engineering. All rights reserved.

11.
5th IEEE International Image Processing, Applications and Systems Conference, IPAS 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2270648

ABSTRACT

The availability of the internet and the quality of online content have attracted more learners to online platforms, a trend further stimulated by COVID-19. Students of different cognitive capabilities join the learning process, but it is challenging for the instructor to identify each learner's level of comprehension, specifically when learners waver in responding to feedback. A learner's facial expressions relate to content comprehension and engagement. This paper presents the use of a vision transformer (ViT) to model automatic estimation of student engagement by learning end-to-end features from facial images. The ViT architecture enlarges the receptive field by exploiting multi-head attention operations. The model is trained with various loss functions to handle class imbalance. Evaluated on the Dataset for Affective States in E-Environments (DAiSEE), the ViT outperformed the frame-level baseline by approximately 8% and two video-level benchmarks by 8.78% and 2.78%, achieving an overall accuracy of 55.18%. In addition, the ViT with focal loss produced a well-balanced distribution across classes, except for one minority class. © 2022 IEEE.
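
One of the loss functions the abstract mentions for handling class imbalance is focal loss; below is a minimal PyTorch sketch of its standard form (the focusing parameter gamma is an illustrative default, not necessarily the paper's setting):

```python
# Sketch of focal loss for imbalanced engagement classes (PyTorch).
# Down-weights easy examples so rare classes contribute more.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)              # probability of the true class
    return ((1.0 - p_t) ** gamma * ce).mean()
```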

12.
Lecture Notes on Data Engineering and Communications Technologies ; 153:568-574, 2023.
Article in English | Scopus | ID: covidwho-2268937

ABSTRACT

Due to the outbreak of the COVID-19 novel coronavirus, restrictions on movement have left most elderly people staying at home alone, causing them considerable inconvenience. Elderly people living alone may develop various diseases stemming from negative emotions that go undetected and unaddressed. To address this, a method for detecting the facial expressions of the elderly is proposed to determine whether they need timely care. YOLOX, the latest generation of the YOLO series of object detectors released by Megvii Technology in July 2021, adopts recent advances from the field and surpasses comparable detectors in performance and accuracy. Applying the YOLOX detector to the health monitoring of elderly people living alone under the current epidemic conditions could significantly improve the detection rate and accuracy while reducing labor costs. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

13.
19th IEEE India Council International Conference, INDICON 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2283899

ABSTRACT

Detecting facial expressions is a vital aspect of interpersonal communication. Automatic facial emotion recognition (FER) systems for detecting and analyzing human behavior have been a subject of study for the past decade, and have played key roles in healthcare, crime detection, and other use cases. With the worldwide spread of the COVID-19 pandemic, wearing face masks while interacting in public spaces has become recommended behavior to protect against infection. Therefore, improving existing FER systems to tackle mask occlusion is an extremely important task. In this paper, we analyze how well existing CNN models for FER fare with masked occlusion and present deep CNN architectures to solve this task. We also test some methods to reduce model overfitting, such as data augmentation and dataset balancing. The main metric used to compare the models is accuracy, and the dataset used here is FER2013. Images from FER2013 were covered by masks using a certain transformation, resulting in a new dataset, MFER2013. From our evaluation and experimentation, we found that existing models need to be modified before they can achieve good accuracy on masked datasets. By improving the architecture of the base CNN, we were able to achieve a significantly improved accuracy. © 2022 IEEE.
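
The abstract does not specify the mask transformation used to build MFER2013, so the sketch below is purely hypothetical: it paints a mask-shaped polygon over the lower half of a 48x48 FER2013 face only to show what such an augmentation might look like.

```python
# Hypothetical sketch of synthetically masking a FER2013 face image
# (the paper's actual "certain transformation" is not described in the
# abstract). Draws a flat mask-like polygon over the lower face.
import numpy as np
import cv2

def add_synthetic_mask(img48):
    """img48: 48x48 uint8 grayscale face; returns a masked copy."""
    out = img48.copy()
    pts = np.array([[4, 28], [44, 28], [40, 46], [8, 46]], dtype=np.int32)
    cv2.fillPoly(out, [pts], color=200)  # light-gray mask region
    return out
```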

14.
50th Annual Conference of the European Society for Engineering Education, SEFI 2022 ; : 1696-1703, 2022.
Article in English | Scopus | ID: covidwho-2283484

ABSTRACT

We propose a method that uses emotion analysis for PBL education. Emotion analysis infers a person's emotions from their remarks or facial expressions. In our method, teachers understand the situation of students from the results of the emotion analysis and give accurate advice. PBL education often involves group activities: students discuss in groups, propose ideas, select among them, and make products. However, not all students are able to participate in discussions and express their opinions, and it is the teacher's duty to provide guidance to such students. We therefore propose using emotion analysis techniques to identify and guide students who have problems, such as those who cannot participate in discussions. The method makes it possible for one teacher to keep track of multiple groups at the same time and to help develop the students' ability to learn. Under COVID-19, face-to-face classes were restricted, and online classes using Zoom and similar tools were introduced in PBL education. In online classes, it is difficult to grasp the situation of students, a big difference from face-to-face classes, and the gap between students who are willing to take classes and those who are reluctant has widened with the shift online. We therefore looked for ways to keep track of the situation of all students. After the method was applied to the classes, the number of students who actively participated increased, confirming the effectiveness of the proposed method. © 2022 SEFI 2022 - 50th Annual Conference of the European Society for Engineering Education, Proceedings. All rights reserved.

15.
15th International Conference on COMmunication Systems and NETworkS, COMSNETS 2023 ; : 462-465, 2023.
Article in English | Scopus | ID: covidwho-2281703

ABSTRACT

Due to the Covid-19 pandemic, people have been forced to move to online spaces to attend classes, meetings, and similar activities. The effectiveness of online classes depends on the engagement level of students. A straightforward way to monitor engagement is to observe students' facial expressions, eye gaze, head gesticulations, hand movements, and body movements through their video feed. However, video-based engagement detection has limitations: it is influenced by video backgrounds, lighting conditions, and camera angles, and by students' unwillingness to turn on the camera. In this work, we propose a non-intrusive mechanism for estimating engagement level by monitoring head gesticulations through the channel state information (CSI) of WiFi signals. First, we conduct an anonymous survey to investigate whether head gesticulation patterns correlate with engagement. We then develop models to recognize head gesticulations from CSI. Later, we plan to correlate the head gesticulation pattern with the instructor's intent to estimate students' engagement. © 2023 IEEE.

16.
Traitement du Signal ; 39(6):1929-1941, 2022.
Article in English | Scopus | ID: covidwho-2247854

ABSTRACT

Online education has become increasingly common due to the Covid-19 pandemic. A key difference between online and face-to-face education is that instructors often cannot see their students' facial expressions online. This is problematic because facial expressions help an instructor gauge engagement and understanding, so a mechanism for monitoring students' facial expressions during an online lecture would be useful: the information can serve as feedback for teachers on whether to change a particular teaching method or keep it. This research presents a system that can automatically distinguish eight student facial expressions (anger, attention, disgust, fear, happiness, neutrality, sadness, and surprise). The data was collected from pictures of 70 university students' facial expressions and comprises 6720 images distributed equally among the eight expressions, that is, 840 images per category. Pre-trained deep learning networks (AlexNet, MobileNetV2, GoogleNet, ResNet18, ResNet50, and VGG16) with transfer learning (TL) and K-fold cross-validation (KFCV) were used to recognize the students' facial expressions. The experiments were conducted in MATLAB 2021a, and the best results were recorded by ResNet18, with an F1-score of 99% and an AUC of 100%. © 2022 Lavoisier. All rights reserved.
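
The paper ran its transfer-learning experiments in MATLAB 2021a; as an equivalent illustration, here is a PyTorch/torchvision sketch of the same idea for ResNet18, reusing ImageNet weights and replacing the final layer for the eight expression classes:

```python
# Equivalent PyTorch sketch of ResNet18 transfer learning for 8-class
# student facial expression recognition (the paper used MATLAB 2021a).
import torch.nn as nn
from torchvision import models

def build_fer_resnet18(num_classes=8, freeze_backbone=True):
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False  # keep ImageNet features fixed
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head
    return model
```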

17.
Multimed Tools Appl ; : 1-30, 2022 Sep 09.
Article in English | MEDLINE | ID: covidwho-2261661

ABSTRACT

The dramatic impact of the COVID-19 pandemic has resulted in the closure of physical classrooms and teaching methods being shifted to the online medium. To make the online learning environment more interactive, just like traditional offline classrooms, it is essential to ensure the proper engagement of students during online learning sessions. This paper proposes a deep learning-based approach using facial emotions to detect the real-time engagement of online learners. This is done by analysing the students' facial expressions to classify their emotions throughout the online learning session. The facial emotion recognition information is used to calculate the engagement index (EI) to predict two engagement states, "Engaged" and "Disengaged". Different deep learning models such as Inception-V3, VGG19 and ResNet-50 are evaluated and compared to get the best predictive classification model for real-time engagement detection. Varied benchmarked datasets such as FER-2013, CK+ and RAF-DB are used to gauge the overall performance and accuracy of the proposed system. Experimental results showed that the proposed system achieves an accuracy of 89.11%, 90.14% and 92.32% for Inception-V3, VGG19 and ResNet-50, respectively, on benchmarked datasets and our own created dataset. ResNet-50 outperforms all others with an accuracy of 92.3% for facial emotions classification in real-time learning scenarios.
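
The abstract does not give the engagement index (EI) formula, so the following is only a plausible sketch of how per-frame emotion probabilities could be folded into the two-state decision; the emotion weights and threshold are assumptions for illustration:

```python
# Hypothetical sketch of an engagement index (EI) computed from emotion
# probabilities; weights and threshold are assumed, not the paper's.
EMOTION_WEIGHTS = {
    "happy": 1.0, "surprise": 0.8, "neutral": 0.6,
    "sad": 0.3, "fear": 0.2, "disgust": 0.1, "angry": 0.1,
}

def engagement_index(emotion_probs):
    """Weighted sum of emotion probabilities, in [0, 1]."""
    return sum(EMOTION_WEIGHTS[e] * p for e, p in emotion_probs.items())

def engagement_state(emotion_probs, threshold=0.5):
    return "Engaged" if engagement_index(emotion_probs) >= threshold else "Disengaged"
```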

18.
Multimed Tools Appl ; : 1-19, 2022 Oct 22.
Article in English | MEDLINE | ID: covidwho-2253965

ABSTRACT

People use various nonverbal communicative channels to convey emotions, among which facial expressions are considered the most important. Thus, automatic Facial Expression Recognition (FER) is a fundamental task for increasing the perceptive skills of computers, especially in human-computer interaction. Like humans, state-of-the-art FER systems are able to recognize emotions from the entire face of a person. However, the COVID-19 pandemic has imposed massive use of face masks that help prevent infection but may hamper social communication and make the recognition of facial expressions very challenging due to facial occlusion. In this paper we propose a FER system capable of recognizing emotions from masked faces. The system checks for the presence of a mask on the face image and, if a mask is detected, it extracts the eyes region and recognizes the emotion considering only that portion of the face. The effectiveness of the developed FER system was tested by recognizing emotions and their valence from the eyes region alone and comparing the results with those obtained from the entire face. As expected, emotions that rely mainly on the mouth region (e.g., disgust) are barely recognized, while positive emotions are better identified when considering only the eyes region. Moreover, we compared the results of our FER system to human annotation of emotions on masked faces. We found that the FER system outperforms the human annotators, showing that the model is able to learn proper features for each emotion leveraging only the eyes region.
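
A minimal sketch of the eyes-region strategy described above, approximating the eyes region as the upper half of the detected face box; the paper's actual mask detector and region extraction may differ.

```python
# Sketch: when a mask is present, classify emotion from the eyes
# region only, approximated here as the upper half of the face box.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def eyes_region(frame):
    """Return the upper-half crop of the first detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return frame[y:y + h // 2, x:x + w]  # eyes/forehead band
```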

19.
Multimed Tools Appl ; : 1-27, 2023 Feb 10.
Article in English | MEDLINE | ID: covidwho-2241199

ABSTRACT

Due to the COVID-19 crisis, the education sector has shifted to a virtual environment. Monitoring the engagement level and providing regular feedback during e-classes is a major concern, as this facility is lacking in the e-learning environment, where the teacher cannot physically observe students. This study presents an engagement detection system that ensures students get immediate feedback during e-learning by analysing the student's behaviour throughout the e-learning session. The proposed approach evaluates three behavioural modalities (facial expression, eye blink count, and head movement) from the live video stream to predict student engagement in e-learning. The system is implemented using deep-learning models such as VGG-19 and ResNet-50 for facial emotion recognition and a facial-landmark approach for eye-blink and head-movement detection. The results from the different modalities are combined to determine the engagement index (EI), whose value predicts an engaged or disengaged state. The present study suggests that the proposed facial-cues-based multimodal system accurately determines student engagement in real time. The experimental research achieved an accuracy of 92.58% and showed that the proposed engagement detection approach significantly outperforms existing approaches.
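
Of the three modalities, the eye-blink count is the most standard to illustrate: the classic eye aspect ratio (EAR) over six eye landmarks, thresholded over consecutive frames. The landmark detector is assumed to be provided upstream (e.g., dlib or MediaPipe), and the threshold values are illustrative rather than the paper's.

```python
# Sketch of the eye-blink modality via the eye aspect ratio (EAR),
# computed from six landmarks p1..p6 around one eye.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmarks p1..p6 around one eye."""
    a = np.linalg.norm(eye[1] - eye[5])   # |p2 - p6|
    b = np.linalg.norm(eye[2] - eye[4])   # |p3 - p5|
    c = np.linalg.norm(eye[0] - eye[3])   # |p1 - p4|
    return (a + b) / (2.0 * c)

def count_blinks(ear_series, threshold=0.21, min_frames=2):
    """Count drops of EAR below threshold lasting >= min_frames frames."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks
```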

20.
Vis Inform ; 2022 Oct 24.
Article in English | MEDLINE | ID: covidwho-2243089

ABSTRACT

Digital learning has become increasingly important during the COVID-19 crisis and is now widespread in most countries. The proliferation of smart devices and 5G telecommunications systems is contributing to the development of digital learning systems as an alternative to traditional learning systems. Digital learning encompasses blended learning, online learning, and personalized learning, which depend mainly on the use of new technologies and strategies, and it has been widely developed to improve education and cope with emerging disasters such as the COVID-19 disease. Despite the tremendous benefits of digital learning, there are many obstacles related to the lack of digitized curricula and of collaboration between teachers and students. Therefore, many attempts have been made to improve learning outcomes through strategies such as collaboration, teacher convenience, personalized learning, cost and time savings through professional development, and modeling. In this study, facial expressions and heart rate are used to measure the effectiveness of digital learning systems and the level of learners' engagement in learning environments. The results showed that the proposed approach outperformed the known related works in terms of learning effectiveness, and they can be used to develop digital learning environments.
